autonomous weapon


Syria's leader says his country has transformed from 'an exporter of crisis.'

NYT > Middle East

On Wednesday, officials and diplomats sounded the alarm on A.I.'s ability to undermine the integrity of information and fabricate convincing fake voice and video recordings. They also warned that it posed a threat to cybersecurity and would enable the rise of autonomous weapons. Still, some argued that, if used responsibly and with guardrails, A.I. could also help foster peace and stability. Secretary-General António Guterres, who for the past year has championed efforts to regulate A.I., said that the Council had a responsibility to ensure the military use of artificial intelligence complies with international law and the U.N. Charter. "From design to deployment to decommissioning, A.I. systems must always comply with international law; military uses must be clearly regulated," Mr. Guterres said, before ending his speech with a warning and a call to action.


Fox News AI Newsletter: Holy See calls for end to autonomous weapons

FOX News

Fox News chief political anchor Bret Baier has the latest on the pros and cons of the bombshell developments on 'Special Report.' The Vatican flag flies outside the United Nations headquarters on Sept. 25, 2015, in New York City. 'PROPER HUMAN CONTROL': A delegation representing the Holy See urged the United Nations this week to put a moratorium on autonomous weapons designed to kill without human decision-making. 'INSANE': Canva is facing pushback from customers over plans to increase subscription prices by more than 300% in some instances. United Nations Headquarters in New York City is seen flanked by Hamas and Hezbollah fighters.


AI's 'Oppenheimer moment': autonomous weapons enter the battlefield

The Guardian

A squad of soldiers is under attack and pinned down by rockets in the close quarters of urban combat. One of them makes a call over his radio, and within moments a fleet of small autonomous drones equipped with explosives flies through the town square, entering buildings and scanning for enemies before detonating on command. One by one the suicide drones seek out and kill their targets. A voiceover on the video, a fictional ad for multibillion-dollar Israeli weapons company Elbit Systems, touts the AI-enabled drones' ability to "maximize lethality and combat tempo". While defense companies like Elbit promote their new advancements in artificial intelligence (AI) with sleek dramatizations, the technology they are developing is increasingly entering the real world.


The Promise and Peril of AI

TIME - Tech

In early 2023, following an international conference that included dialogue with China, the United States released a "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy," urging states to adopt sensible policies that include ensuring ultimate human control over nuclear weapons. Yet the notion of "human control" itself is hazier than it might seem. If humans authorized a future AI system to "stop an incoming nuclear attack," how much discretion should it have over how to do so? The challenge is that an AI general enough to successfully thwart such an attack could also be used for offensive purposes. We need to recognize that AI technologies are inherently dual-use.


ChatGPT maker quietly changes rules to allow the US military to incorporate its technology

Daily Mail - Science & tech

OpenAI, the maker of ChatGPT, has quietly changed its rules and removed a ban on using the chatbot and its other AI tools for military purposes - and revealed that it is already working with the Department of Defense. Experts have previously voiced fears that AI could escalate conflicts around the world thanks to 'slaughterbots' which can kill without any human intervention. The rule change, which occurred after Wednesday last week, removed a sentence which said that the company would not permit usage of models for 'activity that has high risk of physical harm, including: weapons development, military and warfare.' The spokesman said: 'Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission.'


Pentagon moving to ensure human control so AI doesn't 'make the decision for us'

FOX News

Naftali Bennett spoke exclusively with Fox News Digital about the benefits of AI and the need to set parameters for its use now. The U.S. military is embracing artificial intelligence as a tool for quickly digesting data and helping leaders make the right decision - not for making those decisions for the humans in charge, according to two top AI advisors in U.S. Central Command. CENTCOM, which is tasked with safeguarding U.S. national security in the Middle East and Southeast Asia, just hired Dr. Andrew Moore as its first AI advisor. Moore is the former director of Google Cloud AI and former dean of the Carnegie Mellon University School of Computer Science, and he'll be working with Schuyler Moore, CENTCOM's chief technology officer. In an interview with Fox News Digital, they both agreed that while some are imagining AI-driven weapons, the U.S. military aims to keep humans in the decision-making seat, using AI to assess massive amounts of data in order to help the people sitting in those seats.


'Eyes and ears': Could drones prove decisive in the Ukraine war?

Al Jazeera

Warning: Some readers may find some of the scenes described in this article disturbing. Kyiv, Ukraine – Ivan Ukraintsev, a stern-faced insurance broker turned director of a wartime charity providing crucial aid to Ukraine's military forces, is on a mission: to help Ukraine win the drone war. He is a polite but no-nonsense character, and he is here to talk about drones. "If we [Ukraine] had enough drones, we could end this war in two months," he says firmly. Ivan, who heads up the charity Starlife, had recently returned from overseeing a drone delivery to Bakhmut, a city in eastern Ukraine that has become the focal point for months of bloody battles between Ukrainian and Russian forces. Trench warfare, pockmarked and corpse-ridden swathes of no man's land, and constant artillery bombardments have drawn comparisons to battlefield conditions during World War I.


HELPFUL OR HOMICIDAL -- HOW DANGEROUS IS ARTIFICIAL INTELLIGENCE (AI)? - Dying Words

#artificialintelligence

AI is great for what I do--create content for the entertainment industry--and I have no plans to use AI for world domination. Not like a character I'm basing on a real person for my new series titled City Of Danger. It's a work in progress set for release this fall--2022. I didn't invent the character. I didn't have to, because he exists in real life, and he's a mover and shaker behind many world economic and technological advances, including promoting artificial intelligence. His name is Klaus Schwab.


Should Algorithms Control Nuclear Weapons Launch Codes? The US Says No

WIRED

Last Thursday, the US State Department outlined a new vision for developing, testing, and verifying military systems--including weapons--that make use of AI. The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy represents an attempt by the US to guide the development of military AI at a crucial time for the technology. The document does not legally bind the US military, but the hope is that allied nations will agree to its principles, creating a kind of global standard for building AI systems responsibly. Among other things, the declaration states that military AI needs to be developed according to international laws, that nations should be transparent about the principles underlying their technology, and that high standards be implemented for verifying the performance of AI systems. It also says that humans alone should make decisions around the use of nuclear weapons.


US launches artificial intelligence military use initiative - ABC News

#artificialintelligence

The United States launched an initiative Thursday promoting international cooperation on the responsible use of artificial intelligence and autonomous weapons by militaries, seeking to impose order on an emerging technology that has the potential to change the way war is waged. "As a rapidly changing technology, we have an obligation to create strong norms of responsible behavior concerning military uses of AI, and in a way that keeps in mind that applications of AI by militaries will undoubtedly change in the coming years," Bonnie Jenkins, the State Department's under secretary for arms control and international security, said. She said the U.S. political declaration, which contains non-legally binding guidelines outlining best practices for responsible military use of AI, "can be a focal point for international cooperation." Jenkins launched the declaration at the end of a two-day conference in The Hague that took on additional urgency as advances in drone technology amid Russia's war in Ukraine have accelerated a trend that could soon bring the world's first fully autonomous fighting robots to the battlefield. The U.S. declaration has 12 points, including that military uses of AI be consistent with international law, and that states "maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment."